Attractors in Neural Networks with Infinite Gain
Authors
Abstract
We study a system of equations with discontinuous right-hand side, which arises as a model of gene and neural networks. Associated to the system is a graph of dynamics, which can be used to define a Morse decomposition of the invariant set of the flow on the set of rays through the origin ([5]). We study attractors in R^4 which lie in a set of orthants in the shape of a figure eight. Trajectories can follow either of the two loops of the figure eight. We find that if the attractor is symmetric with respect to these two loops, then the only possible attractor is a periodic orbit which traverses both loops once. We show that without the symmetry the set of admissible attractors includes periodic orbits which follow one loop k times and the other loop once, for any k. However, we also show that no trajectory in an attractor can traverse both loops more than once in a row.
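The class of systems described above can be illustrated with a minimal numerical sketch. The code below integrates a piecewise-constant (Glass-type) network dx/dt = -x + F(s(x)), where s(x) is the vector of signs of the coordinates and F assigns a constant target point to each orthant; the specific map F is a hypothetical example chosen only to show the structure, not the figure-eight system analyzed in the paper.

```python
import numpy as np

def step_sign(x):
    # Sign vector s(x) selecting the current orthant (discontinuous RHS).
    return np.where(x >= 0.0, 1.0, -1.0)

def F(s):
    # Hypothetical orthant-to-target map: a cyclic shift with one sign
    # flip, which drives trajectories from orthant to orthant.
    return np.roll(s, 1) * np.array([1.0, 1.0, 1.0, -1.0])

def simulate(x0, dt=0.01, n_steps=5000):
    # Forward-Euler integration of dx/dt = -x + F(s(x)) in R^4.
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        x += dt * (-x + F(step_sign(x)))
        traj.append(x.copy())
    return np.array(traj)

traj = simulate([0.5, -0.3, 0.2, 0.1])
print(traj.shape)  # (5001, 4)
```

Within each orthant the flow is linear and contracts toward the constant target F(s), so trajectories remain bounded; the interesting dynamics come from switching between orthants, which is what the graph of dynamics in the paper encodes.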
Similar resources
Prediction of Gain in LD-CELP Using Hybrid Genetic/PSO-Neural Models
In this paper, the gain in the LD-CELP speech coding algorithm is predicted using three neural models, which are equipped with genetic and particle swarm optimization (PSO) algorithms to optimize the structure and parameters of the neural networks. Elman, multi-layer perceptron (MLP) and fuzzy ARTMAP are the candidate neural models. The optimized number of nodes in the first and second hidden layers of El...
Techreport NCRG/97/025, under review for Neural Computation: Computation with Infinite Neural Networks
For neural networks with a wide class of weight priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using ne...
Computing with Infinite Networks
For neural networks with a wide class of weight priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using ne...
Models of Innate Neural Attractors and Their Applications for Neural Information Processing
In this work we reveal and explore a new class of attractor neural networks, based on inborn connections provided by model molecular markers, the molecular marker based attractor neural networks (MMBANN). Each set of markers has a metric, which is used to make connections between neurons containing the markers. We have explored conditions for the existence of attractor states, critical relation...
Journal title:
Volume, issue:
Pages: -
Publication date: 1999